Maximising Sensitivity in a Spiking Network
Authors
Abstract
We use unsupervised probabilistic machine learning ideas to try to explain the kinds of learning observed in real neurons, the goal being to connect abstract principles of self-organisation to known biophysical processes. For example, we would like to explain Spike Timing-Dependent Plasticity (see [5, 6] and Figure 3A) in terms of information theory. As a starting point, we explore the optimisation of a network sensitivity measure related to maximising the mutual information between input spike timings and output spike timings. Our derivations are analogous to those in ICA, except that the sensitivity of output timings to input timings is maximised, rather than the sensitivity of output ‘firing rates’ to inputs. ICA and related approaches have been successful in explaining the learning of many properties of early visual receptive fields in rate-coding models, and we hope for similar gains in understanding spike coding in networks, and how it is supported, in principled probabilistic ways, by cellular biophysical processes. For now, in our initial simulations, we show that the derived rule can learn synaptic weights which unmix, or demultiplex, mixed spike trains. That is, it can recover independent point processes embedded in distributed, correlated input spike trains, using an adaptive single-layer feedforward spiking network.

1 Maximising Sensitivity

In this section, we follow the structure of the ICA derivation [4] in developing the spiking theory. We cannot claim, as before, that this yields an information-maximisation algorithm, for reasons we delay addressing until Section 3. For now, to develop our approach, we explore an interim objective function called sensitivity, which we define as the log Jacobian of how input spike timings affect output spike timings.

1.1 How to maximise the effect of one spike timing on another

Consider a spike in neuron j at time t_l that affects the timing of another spike in neuron i at time t_k. The neurons are connected by a weight w_ij. We use i and j to index neurons, and k and l to index spikes, but sometimes, for convenience, we use spike indices in place of neuron indices. For example, w_kl, the weight between an input spike l and an output spike k, is naturally understood to be just the corresponding w_ij.
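The interim objective from Section 1 can be written compactly in the notation just introduced. The following is a minimal LaTeX sketch under our own assumptions: the symbols S for the sensitivity and J for the Jacobian of output spike timings with respect to input spike timings are illustrative names, not necessarily those used in the original derivation.

    % Sketch of the sensitivity objective described in the text.
    % S and J are assumed names; J_{kl} measures how the timing t_k of output
    % spike k shifts with the timing t_l of input spike l, an effect mediated
    % by the weight w_{ij} connecting the two neurons.
    S = \log \left| \det J \right|,
    \qquad
    J_{kl} = \frac{\partial t_k}{\partial t_l}

Maximising S with respect to the weights then parallels the ICA objective log|det W|, with the matrix of spike-timing sensitivities playing the role that the unmixing matrix plays in the rate-coding derivation.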
Similar articles
Learning to Map Input-Output Spike Patterns by Reward-Modulated STDP
Reward-modulated learning rules for spiking neural networks have emerged that have been demonstrated to solve a wide range of reinforcement learning tasks. Despite this, few attempts have been made to teach a spiking network to learn target spike trains. Here, we apply a reward-maximising learning rule to teach a spiking neural network to map between multiple input patterns and single-spike...
Stimulus sensitivity of a spiking neural network model
Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high-dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed, and we provide a notion of stimul...
Model-based reinforcement learning with spiking neurons
Behavioural and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. An increasingly explicit set of neuroscientific ideas has been established for habit formation, whereas goal-directed control has only recently started to attract researchers’ attention. While using functional magnetic resonance imaging to ...
Spiking Neurons by Sparse Temporal Coding and Multilayer RBF Networks
We demonstrate that spiking neural networks encoding information in the timing of single spikes are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on real-world data, and we demonstrate how temporal synchrony in a multi-layer network can induce h...
Unsupervised Classification of Complex Clusters in Networks of Spiking Neurons
For unsupervised clustering in a network of spiking neurons we develop a temporal encoding of continuously valued data to obtain arbitrary clustering capacity and precision with an efficient use of neurons. Input variables are encoded independently in a population code by neurons with 1-dimensional graded and overlapping sensitivity profiles. Using a temporal Hebbian learning rule, the network ...
Publication year: 2004